Partial Domain Adaptation
Adversarial Reweighting for Partial Domain Adaptation
Partial domain adaptation (PDA) has gained much attention due to its practical setting. The current PDA methods usually adapt the feature extractor by aligning the target and reweighted source domain distributions. In this paper, we experimentally find that the feature adaptation by the reweighted distribution alignment in some state-of-the-art PDA methods is not robust to the ``noisy'' weights of source domain data, leading to negative domain transfer on some challenging benchmarks. To tackle the challenge of negative domain transfer, we propose a novel Adversarial Reweighting (AR) approach that adversarially learns the weights of source domain data to align the source and target domain distributions, and the transferable deep recognition network is learned on the reweighted source domain data. Based on this idea, we propose a training algorithm that alternately updates the parameters of the network and optimizes the weights of source domain data. Extensive experiments show that our method achieves state-of-the-art results on the benchmarks of ImageNet-Caltech, Office-Home, VisDA-2017, and DomainNet. Ablation studies also confirm the effectiveness of our approach.
Implicit Semantic Response Alignment for Partial Domain Adaptation
Partial Domain Adaptation (PDA) addresses the unsupervised domain adaptation problem where the target label space is a subset of the source label space. Most state-of-the-art PDA methods tackle the inconsistent label space by assigning weights to classes or individual samples, in an attempt to discard the source data that belongs to the irrelevant classes. However, we believe samples from those extra categories would still contain valuable information to promote positive transfer. In this paper, we propose the Implicit Semantic Response Alignment to explore the intrinsic relationships among different categories by applying a weighted schema on the feature level. Specifically, we design a class2vec module to extract the implicit semantic topics from the visual features. With an attention layer, we calculate the semantic response according to each implicit semantic topic. Then semantic responses of source and target data are aligned to retain the relevant information contained in multiple categories by weighting the features, instead of samples. Experiments on several cross-domain benchmark datasets demonstrate the effectiveness of our method over the state-of-the-art PDA methods. Moreover, we provide in-depth analyses to further explore implicit semantic alignment.
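As a rough illustration of the feature-level weighting idea, the sketch below computes attention-style responses of features over a set of topic vectors and an alignment loss on the mean responses. The topic matrix, the dimensions, and the squared-difference loss are all assumptions for illustration; the paper's class2vec module and attention layer are learned, not random.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 16, 4                     # feature dim, number of implicit topics (hypothetical sizes)
T = rng.normal(size=(k, d))      # stand-in for learned implicit semantic topic vectors

def semantic_response(X, T):
    """Attention of each feature vector over the topics (softmax across topics)."""
    logits = X @ T.T                                  # (n, k) topic scores
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    att = np.exp(logits)
    return att / att.sum(axis=1, keepdims=True)       # rows sum to 1

Xs = rng.normal(size=(32, d))    # toy source features
Xt = rng.normal(size=(24, d))    # toy target features

rs, rt = semantic_response(Xs, T), semantic_response(Xt, T)
# Alignment loss on mean semantic responses: features are weighted, not samples.
align_loss = np.sum((rs.mean(axis=0) - rt.mean(axis=0)) ** 2)
print(round(float(align_loss), 4))
```

Minimizing such a response-alignment term with respect to the feature extractor and topics would pull the two domains' topic usage together, which is the intuition the abstract describes.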
Adversarial Reweighting for Partial Domain Adaptation Supplementary Material
This supplementary material compares the typical PDA methods in Table S-1, including the alignment criteria they employ, such as the Maximum Mean Discrepancy (MMD) and the Jensen-Shannon (JS) divergence. Results on ImageNet-Caltech and VisDA-2017 are shown in Table S-4. The supplement also details the computation of the Wasserstein distance discussed in the main paper: using Eq. (S-7), the Wasserstein distance W(µ, ν) is approximated by an empirical expectation over samples. Finally, Algorithm 1 presents the pseudo-code of the training algorithm from the main paper.
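Eq. (S-7) itself is not reproduced here. As a generic, hedged illustration of estimating a Wasserstein distance from samples (not the supplement's estimator), the 1-D Wasserstein-1 distance between two equal-size empirical samples has a closed form via sorting:

```python
import numpy as np

def wasserstein1_1d(a, b):
    """W1 between two equal-size 1-D empirical distributions (closed form via sorting)."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(0.5, 1.0, 1000)
print(round(wasserstein1_1d(a, b), 2))  # close to the true shift of 0.5
```

For a pure translation of a distribution, W1 equals the size of the shift, so the empirical estimate lands near 0.5; in higher dimensions no such closed form exists, which is why practical methods resort to approximations like the supplement's Eq. (S-7).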
Implicit Semantic Response Alignment for Partial Domain Adaptation (Supplementary Material)
All domains include a large number (345) of object categories, such as bracelet, plane, bird, and cello. We take the "synthetic" (S) domain as the training domain and the "real" (R) domain as the target, and we adopt the same hyperparameters as for Office-Home in the following experiments. As expected, class car, which is semantically similar to [...], whereas class horse suffers a 17.01% [...].
Partial Domain Adaptation via Importance Sampling-based Shift Correction
Guo, Cheng-Jun, Ren, Chuan-Xian, Luo, You-Wei, Xu, Xiao-Lin, Yan, Hong
Partial domain adaptation (PDA) is a challenging task in real-world machine learning scenarios. It aims to transfer knowledge from a labeled source domain to a related unlabeled target domain, where the support set of the source label distribution subsumes the target one. Previous PDA works managed to correct the label distribution shift by weighting samples in the source domain. However, the simple reweighting technique cannot explore the latent structure or sufficiently use the labeled data, so models are prone to over-fitting on the source domain. In this work, we propose a novel importance sampling-based shift correction (IS$^2$C) method, where new labeled data are sampled from a built sampling domain, whose label distribution is supposed to be the same as the target domain, to characterize the latent structure and enhance the generalization ability of the model. We provide theoretical guarantees for IS$^2$C by proving that the generalization error can be sufficiently dominated by IS$^2$C. In particular, by implementing sampling with the mixture distribution, the extent of shift between source and sampling domains can be connected to generalization error, which provides an interpretable way to build IS$^2$C. To improve knowledge transfer, an optimal transport-based independence criterion is proposed for conditional distribution alignment, where the computation of the criterion can be adjusted to reduce the complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2)$ in realistic PDA scenarios. Extensive experiments on PDA benchmarks validate the theoretical results and demonstrate the effectiveness of our IS$^2$C over existing methods.
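A minimal sketch of the importance-sampling idea, under the assumption that the target label distribution is known or has been estimated: source samples are resampled with weights p_t(y)/p_s(y), so the sampled set's label distribution matches the target's. The class counts and distributions below are toy values, not the paper's construction of the sampling domain.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy source labels over 3 classes; the target only contains classes {0, 1}.
ys = rng.integers(0, 3, size=3000)
p_s = np.bincount(ys, minlength=3) / len(ys)   # empirical source label distribution
p_t = np.array([0.6, 0.4, 0.0])                # assumed/estimated target label distribution

# Importance weights p_t(y) / p_s(y) per sample, then draw a new labeled set.
w = p_t[ys] / p_s[ys]
idx = rng.choice(len(ys), size=2000, replace=True, p=w / w.sum())
resampled = np.bincount(ys[idx], minlength=3) / 2000

print(np.round(resampled, 2))   # approximately [0.6, 0.4, 0.0]
```

The resampled set drops the outlier class entirely and reproduces the target's class proportions up to sampling noise, which is the shift-correction property the abstract's sampling domain is designed to have.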
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Europe > United Kingdom > England > South Yorkshire > Sheffield (0.04)
- Europe > Greece (0.04)
- Asia > China > Hong Kong > Kowloon (0.04)
Bi-level Unbalanced Optimal Transport for Partial Domain Adaptation
Chen, Zi-Ying, Ren, Chuan-Xian, Yan, Hong
The partial domain adaptation (PDA) problem requires aligning cross-domain samples while distinguishing the outlier classes for accurate knowledge transfer. The widely used weighting framework tries to address the outlier classes by introducing a reweighted source domain with a label distribution similar to the target domain. However, the empirical modeling of weights can only characterize the sample-wise relations, which leads to insufficient exploration of cluster structures, and the weights can be sensitive to inaccurate predictions and cause confusion on the outlier classes. To tackle these issues, we propose a Bi-level Unbalanced Optimal Transport (BUOT) model to simultaneously characterize the sample-wise and class-wise relations in a unified transport framework. Specifically, a cooperation mechanism between sample-level and class-level transport is introduced, where the sample-level transport provides essential structure information for the class-level knowledge transfer, while the class-level transport supplies discriminative information for the outlier identification. The bi-level transport plan provides guidance for the alignment process. By incorporating the label-aware transport cost, the local transport structure is ensured, and a fast computation formulation is derived to improve the efficiency.

Introduction

Traditional machine learning usually follows the assumption that training data and test data come from the same distribution; in practice, this assumption is often violated. The resulting distribution discrepancy can degrade the performance of machine learning models when they are deployed in new environments or domains. To overcome this challenge, unsupervised domain adaptation (UDA) [1, 2] has been developed to transfer knowledge from the labeled source domain to the unlabeled target domain, enabling models trained on the source domain to generalize well to the target domain.
Usually, UDA methods train the model using source domain samples to minimize the source domain classification error and then use appropriate methods to eliminate the cross-domain divergence.
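BUOT's bi-level unbalanced formulation is beyond a short snippet, but the sample-level transport plan it builds on can be illustrated with plain balanced entropic optimal transport (Sinkhorn iterations). All sizes and the regularization value below are illustrative assumptions.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=500):
    """Balanced entropic OT plan between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # scale columns to match target marginal b
        u = a / (K @ v)     # scale rows to match source marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(4)
Xs = rng.normal(size=(5, 2))                           # toy source points
Xt = Xs[[0, 2, 4]] + 0.01 * rng.normal(size=(3, 2))    # target = 3 perturbed source points
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
C = C / C.max()                                        # normalize for numerical stability
a = np.full(5, 1 / 5)
b = np.full(3, 1 / 3)
P = sinkhorn(C, a, b)
print(np.round(P.sum(), 2))                            # 1.0: total mass of the balanced plan
```

The plan P is the sample-wise coupling object; an unbalanced variant relaxes the marginal constraints, and BUOT additionally couples this sample-level plan with a class-level one.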
Theoretical Performance Guarantees for Partial Domain Adaptation via Partial Optimal Transport
Naram, Jayadev, Hellström, Fredrik, Wang, Ziming, Jörnsten, Rebecka, Durisi, Giuseppe
In many scenarios of practical interest, labeled data from a target distribution are scarce while labeled data from a related source distribution are abundant. One particular setting of interest arises when the target label space is a subset of the source label space, leading to the framework of partial domain adaptation (PDA). Typical approaches to PDA involve minimizing a domain alignment term and a weighted empirical loss on the source data, with the aim of transferring knowledge between domains. However, a theoretical basis for this procedure is lacking, and in particular, most existing weighting schemes are heuristic. In this work, we derive generalization bounds for the PDA problem based on partial optimal transport. These bounds corroborate the use of the partial Wasserstein distance as a domain alignment term, and lead to theoretically motivated explicit expressions for the empirical source loss weights. Inspired by these bounds, we devise a practical algorithm for PDA, termed WARMPOT. Through extensive numerical experiments, we show that WARMPOT is competitive with recent approaches, and that our proposed weights improve on existing schemes.
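A hedged sketch of how a partial transport plan can induce explicit source weights, in the spirit of (but not identical to) the scheme above: partial OT is emulated by appending a zero-cost dummy target bin that absorbs the untransported source mass, and each source sample's weight is the mass it ships to the real target points. The fraction s, the point configuration, and the dummy-bin construction are illustrative assumptions.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=1000):
    """Balanced entropic OT plan between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(5)
Xs = np.vstack([rng.normal(0, 0.2, (10, 2)),     # 10 source points near the target
                rng.normal(4, 0.2, (10, 2))])    # 10 "outlier-class" source points
Xt = rng.normal(0, 0.2, (10, 2))                 # target points

s = 0.5                                          # fraction of source mass to transport
C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
C = np.hstack([C / C.max(), np.zeros((20, 1))])  # extra zero-cost dummy column
a = np.full(20, 1 / 20)
b = np.append(np.full(10, s / 10), 1.0 - s)      # dummy bin absorbs untransported mass

P = sinkhorn(C, a, b)
w = P[:, :10].sum(axis=1)                        # per-sample weights from the partial plan
print(w[:10].sum() > w[10:].sum())               # True: near-target samples get the weight
```

Samples far from the target ship almost all their mass to the dummy bin and receive near-zero weight, which mirrors how a partial-Wasserstein alignment term can yield explicit weights that suppress outlier-class source data.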